By Loudon Blair, Ciena
Growth in broadband traffic has been predictable over the past two decades. Driven by the evolution of cloud services, big data and analytics, video (as streaming progressed from SD to HD and, most recently, 4K) and social applications, year-on-year increases of 20 to 30 percent were the norm, and operators could scale their infrastructure accordingly.
But a new era of unpredictable traffic growth is upon us, driven by the emergence – and soon, dominance – of artificial intelligence.
A recent global survey found that data center experts anticipate at least a sixfold increase in DCI bandwidth demand over the next five years, requiring new levels of scalability.
As generative AI goes multi-modal, AI-driven automation is more widely adopted, AI agents and assistants fuel digital transformation, and inferencing pushes computing toward the network edge, the sheer volume of traffic these applications generate is set to place enormous demands on the network.
And the first network segment to bear this burden is not the one connecting users to their data. In fact, it is the first port of call: the network within the data center itself.
Traffic Needs Are Compounding Existing Wavelength Demands
By and large, the discussions around AI infrastructure requirements have focused on the compute power needed to enable AI workloads, as well as the energy and physical data center footprint.
But from a business value perspective, all of this is moot unless AI data centers are connected securely and sustainably, with traffic capacity beyond even expected demand.
While cloud computing, big data analytics and video have driven the bandwidth strain on data center interconnect (DCI) to date, the survey referenced above found that more than half (53 percent) of respondents predict AI workloads will overtake traditional cloud and big data applications within three years, if not two.
This doesn’t mean cloud, big data and video are going anywhere; they’re likely to continue to grow and adapt with AI. AI is just adding to that strain. Thus, it’s clear that more bandwidth and more speed are critical to the network broadly, but particularly for DCI.
A New Baseline of Connectivity
Expected demand (and, given AI’s non-linear evolution, unexpected demand) will require significant investment not only in data center rollouts and estates, but also in the underlying connectivity infrastructure between and within data centers.
Looking at the network map of tomorrow, new AI data centers will primarily be built where abundant power is available, which may mean more regional locations than ever before to take advantage of lower energy prices, renewable energy sources and lower land costs. Additionally, data centers built specifically for inferencing will increasingly sit at the edge of the network, closer to end users, both people and machines.
What’s needed is performance and wavelength capacity that stay ahead of the curve, with headroom for spikes in demand and traffic. To fully realize AI-enabled services and anticipated growth, network providers and vendors are taking steps to ensure the network is a powerful enabler of AI. Thankfully, much work has gone into enabling high-bandwidth communications between data centers over hundreds and thousands of kilometers in recent years, and 1.6 Tb/s single-carrier wavelengths are now possible.
To connect AI data centers efficiently over longer distances, the above-mentioned survey shows a strong demand for high-capacity optical transport, optical spectrum expansion, and power-efficient solutions.
When asked what will be needed to connect AI data centers over distances greater than 100 kilometers two years from now, 55 percent of respondents identified high-capacity transponders (1.6 Tb/s per wavelength and higher) as a requirement.
To achieve this, coherent optical modems (as opposed to the intensity modulation direct detection, or IMDD, approach currently used inside the data center over short distances) are essential for scaling bandwidth to accommodate growing AI workloads.
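As a rough, back-of-the-envelope illustration of why per-wavelength rate matters at this scale, the short sketch below compares how many wavelengths (and therefore transponders) a given interconnect demand would require at different line rates. The 60 Tb/s demand figure is purely an assumption for the sketch, not a figure from the survey or any vendor specification.

```python
import math

# Illustrative only: how per-wavelength line rate affects the number of
# wavelengths needed for a hypothetical AI data center interconnect.
dci_demand_gbps = 60_000  # assumed aggregate demand between two sites (60 Tb/s)

for rate_gbps in (400, 800, 1600):  # per-wavelength line rates: 400G, 800G, 1.6T
    wavelengths = math.ceil(dci_demand_gbps / rate_gbps)
    print(f"{rate_gbps}G per wavelength -> {wavelengths} wavelengths")

# 400G per wavelength -> 150 wavelengths
# 800G per wavelength -> 75 wavelengths
# 1600G per wavelength -> 38 wavelengths
```

Under these assumed numbers, moving from 400G to 1.6T per wavelength cuts the transponder count roughly fourfold, which is where the space and power savings come from.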
Coherent Modulation: The Key to Distance and Capacity
As data rates inside the data center increase to 1.6T and beyond, traditional IMDD technology is running out of steam and will be unable to propagate optical signals over the desired distances.
Coherent optical modems help solve this challenge: they deliver a higher loss budget, mitigate crosstalk, and scale to higher capacity per fiber with wavelength division multiplexing (WDM). Coherent technology offers greater spectral efficiency and is better positioned to deliver the capacity and performance margin that new data center architectures require, as the rough sketch below suggests.
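To make the capacity-per-fiber point concrete, here is a minimal estimate assuming, purely for illustration, roughly 4.8 THz of usable C-band spectrum and a 150 GHz channel width per 1.6 Tb/s coherent carrier; these figures are assumptions for the sketch, not vendor specifications.

```python
# Back-of-the-envelope per-fiber capacity with coherent WDM.
# Spectrum and channel-width values are illustrative assumptions.
c_band_spectrum_ghz = 4800      # assumed usable C-band spectrum (~4.8 THz)
channel_width_ghz = 150         # assumed width of a 1.6 Tb/s coherent carrier
rate_per_wavelength_tbps = 1.6  # single-carrier rate cited in the article

channels = c_band_spectrum_ghz // channel_width_ghz
total_tbps = channels * rate_per_wavelength_tbps

print(f"{channels} wavelengths x {rate_per_wavelength_tbps} Tb/s "
      f"~= {total_tbps:.1f} Tb/s per fiber")
# 32 wavelengths x 1.6 Tb/s ~= 51.2 Tb/s per fiber
```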
And it’s not the first time coherent has saved the day. A similar inflection point occurred a decade ago in wide area networks, when the infrastructure supporting the internet moved to 100G and existing IMDD modulation techniques could not carry those signals over hundreds or thousands of kilometers. Coherent modulation enabled 100G over vast distances then, and it has continued to evolve as demand has increased.
As AI adoption evolves, demand for extreme bandwidth has become even more significant, and a reshaping of the data center landscape is underway simply to keep up with expected demand. The way these data centers interconnect must not only be secure and sustainable but also carry enormous amounts of traffic over great distances seamlessly.
While there is no “one-size-fits-all” architecture and no off-the-shelf template that data center decision-makers can implement, steps are being taken today to prepare networks for the anticipated demands. The one constant that data center leaders can plan for is the need for performance and wavelength capabilities that outpace AI’s current rate of evolution.
Loudon Blair is Senior Director of the Corporate Strategy Office of Ciena, a U.S.-based high-speed networking systems and software company.




